Three things in AI to watch, according to a Nobel-winning economist

MIT Technology Review

Daron Acemoglu is more cautious than most about predictions of a jobs apocalypse. A few months before he was awarded the Nobel Prize in economics in 2024, Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech CEOs had been promising--an overhaul of all white-collar work--he estimated that AI would give only a small boost to US productivity and would not obviate the need for human work. It's okay at automating certain tasks, he wrote, but some jobs will be perfectly fine. Two years later, Acemoglu's measured take has not caught on. Chatter about an AI jobs apocalypse pops up everywhere from Senator Bernie Sanders's rallies to conversations I overhear in line at the grocery store.


Former OpenAI board member says Elon Musk offered her sperm donations

BBC News

A former OpenAI board member has explained how her unconventional personal relationship with Elon Musk evolved into having four of his children. Shivon Zilis testified for hours in a federal courtroom in Oakland, California on Wednesday as part of Musk's lawsuit seeking to reverse OpenAI's conversion to a for-profit company. The focus of Zilis's appearance was her direct involvement in early talks with Musk about the company becoming a for-profit, but also how she worked for, and became involved with, Musk while she advised OpenAI. "I still really wanted to be a mum and Elon made the offer around that time and I accepted," she said, explaining that Musk had offered to donate sperm in 2020. "He was encouraging everyone around him at that time to have kids and he'd noticed I did not."


Elon Musk Seemingly Admits xAI Has Used OpenAI's Models to Train Its Own

WIRED

While answering questions under oath, Musk argued it's standard practice for AI labs to use their competitors' models. While testifying on Thursday in federal court, Elon Musk seemed to indicate that his AI lab may have used OpenAI's models to train xAI's own. He touched upon the topic while sitting on the witness stand answering cross-examination questions from an OpenAI attorney amid his ongoing legal battle against the ChatGPT maker. "Do you know what distillation is? It means to use one AI model to train another AI model."


SpaceX and Cursor strike partnership that might end in a $60 billion acquisition

Engadget

The X and xAI owner is now working closely with the maker of the AI coding tool. SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI," SpaceX wrote in a post on X. According to SpaceX, the deal allows it either to invest $10 billion in the company known for its AI coding tool, or to acquire it entirely later this year for $60 billion.


Claude Mythos Is Everyone's Problem

The Atlantic - Technology

What happens when AI can hack everything? For the past several weeks, Anthropic says, it has secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities--including exploits in every single major operating system and browser. This level of cyberattack is typically available only to elite, state-sponsored hacking cells in a very small number of countries, including China, Russia, and the United States.


The Download: The Pentagon's new AI plans, and next-gen nuclear reactors

MIT Technology Review

Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?


The Pentagon is planning for AI companies to train on classified data, defense official says

MIT Technology Review

The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.


How AI firm Anthropic wound up in the Pentagon's crosshairs

The Guardian

This week has brought more chaos in the feud between the Pentagon and Anthropic. Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman's OpenAI or Elon Musk's xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.


AI Safety Meets the War Machine

WIRED

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: the Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China. That designation would mean the Pentagon will not do business with firms using Anthropic's AI in their defense work.


'We May Have a Crisis on Our Hands': The Unregulated Rise of Emotionally Intelligent AI

TIME - Tech

Pillay is an editorial fellow at TIME. At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support. Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders--and the companies building AI. That's according to data from 70 countries, gathered by the Collective Intelligence Project (CIP).